

Search for: All records

Creators/Authors contains: "Sohrawardi, Saniat"


  1. The evolving landscape of manipulated media, including the threat of deepfakes, has made information verification a daunting challenge for journalists. Technologists have developed tools to detect deepfakes, but these tools can yield inaccurate results, raising concerns about inadvertently disseminating manipulated content as authentic news. This study examines the impact of unreliable deepfake detection tools on information verification. We conducted role-playing exercises with 24 US journalists, immersing them in complex breaking-news scenarios where determining authenticity was challenging. Through these exercises, we explored journalists' investigative processes, their use of a deepfake detection tool, and their decisions about when and what to publish. Our findings reveal that journalists are diligent in verifying information but sometimes rely too heavily on results from deepfake detection tools. We argue for more cautious release of such tools, accompanied by proper training for users, to mitigate the risk of unintentionally propagating manipulated content as real news.
  2. Clickbait headlines rely on superlatives and intensifiers to create information gaps, enticing users to follow links that lead to time-wasting and sometimes malicious websites. This approach can be amplified through targeted clickbait, which draws on publicly available social media information to align headlines with users' preferences and beliefs. In this work, we first conducted preliminary studies to understand the influence of targeted clickbait on users' clicking behavior. Based on our findings, we involved 24 users in the participatory design of story-based warnings against targeted clickbait. Our analysis of user-created warnings led to four design variations, which we evaluated through an online survey on Amazon Mechanical Turk. Our findings show the importance of integrating information with persuasive narratives to create effective warnings against targeted clickbait. Overall, our studies provide valuable insights into users' perceptions of and behaviors toward targeted clickbait, and into the efficacy of story-based interventions.